Packages for mathematics
March 1, 2025
Among other advantages, mathematical algorithms in Julia can be written very elegantly.
### LinearAlgebra.jl

It is part of the Standard Library.
```
tr(A) = 3
det(A) = 104.0
inv(A) =
3×3 Matrix{Float64}:
 -0.451923   0.211538    0.0865385
  0.365385  -0.192308    0.0576923
  0.240385   0.0576923  -0.0673077
```
The syntax for linear systems is the usual one:
```
3-element Vector{Float64}:
 0.2307692307692308
 0.15384615384615383
 0.15384615384615385
```
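The notes do not show the code that produced the output above, so here is a self-contained sketch of the same calls (the matrix `A` and vector `b` below are our own example, not the original data):

```julia
using LinearAlgebra

A = [4.0 1.0 2.0;
     3.0 5.0 1.0;
     1.0 2.0 6.0]
b = [1.0, 2.0, 3.0]

tr(A)       # trace: sum of the diagonal entries, here 15.0
det(A)      # determinant
inv(A)      # explicit inverse (rarely needed in practice)

x = A \ b   # solve A x = b; preferred over inv(A) * b
A * x ≈ b   # check the residual
```

The backslash operator is the idiomatic way to solve systems; it factorizes rather than inverting.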
| Type | Description |
|---|---|
| Symmetric | Symmetric matrix |
| Hermitian | Hermitian matrix |
| UpperTriangular | Upper triangular matrix |
| UnitUpperTriangular | Upper triangular matrix with unit diagonal |
| LowerTriangular | Lower triangular matrix |
| UnitLowerTriangular | Lower triangular matrix with unit diagonal |
| UpperHessenberg | Upper Hessenberg matrix |
| Tridiagonal | Tridiagonal matrix |
| SymTridiagonal | Symmetric tridiagonal matrix |
| Bidiagonal | Upper/lower bidiagonal matrix |
| Diagonal | Diagonal matrix |
| UniformScaling | Uniform scaling operator |
```
3×3 UnitUpperTriangular{Int64, Matrix{Int64}}:
 1  2  3
 ⋅  1  3
 ⋅  ⋅  1
```
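These wrapper types are built from an ordinary matrix; a sketch (the matrix `M` is chosen so that the display above is reproduced):

```julia
using LinearAlgebra

M = [1 2 3;
     4 1 3;
     5 6 1]

U = UnitUpperTriangular(M)   # views the upper triangle, forces a unit diagonal
S = Symmetric(M)             # uses the upper triangle by default
D = Diagonal([1.0, 2.0, 3.0])

# Solvers dispatch on the type: solving with U uses back substitution
U \ [1.0, 2.0, 3.0]
```

The point of these types is that `\`, `det`, `eigen`, etc. pick structure-aware algorithms automatically.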
`lu` lets us extract the L and U factors together with the permutation vector (or permutation matrix):
```
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
3×3 Matrix{Float64}:
 1.0       0.0        0.0
 0.236216  1.0        0.0
 0.394544  0.0794943  1.0
U factor:
3×3 Matrix{Float64}:
 0.655203  0.044715  0.891312
 0.0       0.845552  0.619651
 0.0       0.0       -0.300705
```
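A sketch of extracting the factors (for some square matrix `A`; here a random one):

```julia
using LinearAlgebra

A = rand(3, 3)
F = lu(A)             # pivoted LU factorization

F.L                   # unit lower triangular factor
F.U                   # upper triangular factor
F.p                   # permutation vector
F.P                   # permutation matrix

# The defining identity, in both forms:
F.L * F.U ≈ A[F.p, :]
F.L * F.U ≈ F.P * A

# Destructuring also works:
L, U, p = lu(A)
```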
Julia can even choose a "good" factorization for our problem:
```
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
3×3 Matrix{Float64}:
 1.0       0.0        0.0
 0.236216  1.0        0.0
 0.394544  0.0794943  1.0
U factor:
3×3 Matrix{Float64}:
 0.655203  0.044715  0.891312
 0.0       0.845552  0.619651
 0.0       0.0       -0.300705
```

```
Cholesky{Float64, Matrix{Float64}}
U factor:
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 0.721159  0.254776  1.02389
  ⋅        0.822935  0.60544
  ⋅         ⋅        0.280712
```
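The generic entry point behind those two outputs is `factorize`, which inspects the matrix and picks a factorization; a sketch with matrices of our own choosing:

```julia
using LinearAlgebra

A = rand(3, 3) + 3I   # a generic square matrix
S = A' * A + I        # symmetric positive definite by construction

F = factorize(A)      # generic matrix: an LU factorization
C = factorize(S)      # SPD structure detected: a Cholesky factorization

C.U' * C.U ≈ S        # the defining identity of the Cholesky factor
```

For SPD matrices one can also call `cholesky(S)` directly.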
We will come back to the importance of factorizations in the next session.
For "difficult" systems, or when we want maximum speed, we can use the LinearSolve.jl package:
```
4-element Vector{Float64}:
 -3.64416159092752
  9.087282877480247
 -9.439806345128664
  2.170120487852354
```
It implements many optimized solvers and also provides a mechanism for selecting an appropriate one.
Explanatory video on YouTube.
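A minimal sketch of the LinearSolve.jl interface (assuming the package is installed; the matrix is our own example):

```julia
using LinearAlgebra, LinearSolve

A = rand(4, 4) + 4I
b = rand(4)

prob = LinearProblem(A, b)
sol = solve(prob)                      # LinearSolve picks an algorithm
sol = solve(prob, LUFactorization())   # or we choose one explicitly
sol.u                                  # the solution vector
```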
### Random.jl

It is part of the Standard Library.
```
rand() = 0.9735270538098799
rand((-1, 2)) = -1
rand(5:10) = 5
```
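The calls above can be reproduced, and made reproducible, with an explicit generator; a sketch:

```julia
using Random

rng = Xoshiro(42)      # a seeded generator, for reproducible runs
rand(rng)              # uniform Float64 in [0, 1)
rand(rng, (-1, 2))     # a random element of the tuple
rand(rng, 5:10)        # a random element of the range
randn(rng, 3)          # three standard normal samples
Random.seed!(1234)     # or seed the default global generator
```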
### Statistics.jl

It is part of the Standard Library.
The JuliaStats family is worth noting: DataFrames, Distributions, HypothesisTests, TimeSeries, …
The Distributions package provides a large collection of probabilistic distributions and related functions.
```
mean(d) = 0.0
rand(d, 3) = [0.4080618199840492, -2.0519555538915393, 1.5910101149355376]
```

```
3-element Vector{Float64}:
  0.4080618199840492
 -2.0519555538915393
  1.5910101149355376
```
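The output above presumably comes from something like the following (`d` is not defined in these notes; we take the standard normal as an example, assuming Distributions.jl is installed):

```julia
using Distributions, Statistics

d = Normal()            # standard normal: mean 0, std 1
mean(d)                 # 0.0
std(d)                  # 1.0
pdf(d, 0.0)             # density at 0, i.e. 1/sqrt(2π)
cdf(d, 0.0)             # 0.5
quantile(d, 0.975)      # ≈ 1.96
rand(d, 3)              # three samples
```

The same generic functions (`mean`, `pdf`, `rand`, …) work across the whole catalogue of distributions.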
See also MeasureTheory.jl.
Turing is a general-purpose probabilistic programming language for robust, efficient Bayesian inference and decision making. Current features include:
- General-purpose probabilistic programming with an intuitive modelling interface;
- Robust, efficient Hamiltonian Monte Carlo (HMC) sampling for differentiable posterior distributions;
- Particle MCMC sampling for complex posterior distributions involving discrete variables and stochastic control flow; and
- Compositional inference via Gibbs sampling that combines particle MCMC, HMC and random-walk MH (RWMH).
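As a flavor of the interface, here is a minimal coin-flip model (a sketch of our own, assuming Turing.jl is installed; not an example from these notes):

```julia
using Turing, Statistics

@model function coinflip(y)
    p ~ Beta(1, 1)             # prior on the probability of heads
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)    # likelihood of each observed flip
    end
end

data = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
chain = sample(coinflip(data), NUTS(), 1000; progress=false)
mean(chain[:p])                # posterior mean of p
```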
### Plots.jl

Plots can be produced directly in a notebook.
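For instance, a minimal sketch (assuming Plots.jl is installed):

```julia
using Plots

x = range(0, 2π; length = 100)
p = plot(x, sin.(x); label = "sin", xlabel = "x")
plot!(p, x, cos.(x); label = "cos", ls = :dash)  # mutating call: adds a series
```

In a notebook the figure is displayed automatically; in a script one would call `savefig`.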
SciML is the combination of scientific computing techniques with machine learning.
```julia
using DifferentialEquations
f(u, p, t) = 1.01 * u
u0 = 1 / 2
tspan = (0.0, 1.0)
prob = ODEProblem(f, u0, tspan)
sol = solve(prob, Tsit5(), reltol=1e-8, abstol=1e-8)

using Plots
plot(sol, linewidth=5, title="Solution to the linear ODE with a thick line",
     xaxis="Time (t)", yaxis="u(t) (in μm)", label="My Thick Line!") # legend=false
plot!(sol.t, t -> 0.5 * exp(1.01t), lw=3, ls=:dash, label="True Solution!")
```

By default, Julia uses Tsit5 instead of Dormand-Prince.
ModelingToolkit.jl is a modeling language for high-performance symbolic-numeric computation in scientific computing and scientific machine learning. It then mixes ideas from symbolic computational algebra systems with causal and acausal equation-based modeling frameworks to give an extendable and parallel modeling system. It allows for users to give a high-level description of a model for symbolic preprocessing to analyze and enhance the model. Automatic symbolic transformations, such as index reduction of differential-algebraic equations, make it possible to solve equations that are impossible to solve with a purely numeric-based technique.
ModelingToolkit.jl is one more layer on top of DifferentialEquations.jl:

```julia
using ModelingToolkit, Plots
using ModelingToolkit: t_nounits as t, D_nounits as D

@mtkmodel FOL begin
    @parameters begin
        τ = 3.0   # parameters
    end
    @variables begin
        x(t) = 0.0   # dependent variables
    end
    @equations begin
        D(x) ~ (1 - x) / τ
    end
end

using OrdinaryDiffEq
@mtkbuild fol = FOL()
prob = ODEProblem(fol, [], (0.0, 10.0), [])
sol = solve(prob)
plot(sol)
```

### JumpProcesses.jl

JumpProcesses.jl provides methods for simulating jump (point) processes, such as Gillespie-type stochastic simulation.

### FEniCS.jl

FEniCS.jl is a wrapper for the FEniCS library for finite element discretizations of PDEs. This wrapper includes three parts:
- Installation and direct access to FEniCS via a Conda installation. Alternatively one may use their current FEniCS installation.
- A low-level development API that provides some functionality to make directly dealing with the library a little bit easier, but still requires knowledge of FEniCS itself. Interfaces have been provided for the main functions and their attributes, and instructions to add further ones can be found here.
- A high-level API for usage with DifferentialEquations. An example can be seen solving the heat equation with high order adaptive timestepping.
### Gridap.jl

Gridap provides a set of tools for the grid-based approximation of partial differential equations (PDEs) written in the Julia programming language. The main motivation behind the development of this library is to provide an easy-to-use framework for the development of complex PDE solvers in a dynamically typed style without sacrificing the performance of statically typed languages. The library currently supports linear and nonlinear PDE systems for scalar and vector fields, single and multi-field problems, conforming and nonconforming finite element discretizations, on structured and unstructured meshes of simplices and hexahedra.
It has a very rich library of worked examples.
### JuliaFEM.jl

The JuliaFEM project develops open-source software for reliable, scalable, distributed Finite Element Method.
### Trixi.jl

Trixi.jl is a numerical simulation framework for adaptive, high-order discretizations of conservation laws.

### JuMP.jl

JuMP is a domain-specific modeling language for mathematical optimization embedded in Julia. It currently supports a number of open-source and commercial solvers for a variety of problem classes, including linear, mixed-integer, second-order conic, semidefinite, and nonlinear programming.
JuMP is a package for Julia. From Julia, JuMP is installed by using the built-in package manager.
You also need to include a Julia package which provides an appropriate solver. One such solver is HiGHS.Optimizer, which is provided by the HiGHS.jl package.
Even if your problem is differentiable, if it is unconstrained there is limited benefit (and downsides in the form of more overhead) to using JuMP over tools which are only concerned with function minimization.
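A minimal linear program, closely following JuMP's introductory example (assumes JuMP and HiGHS are installed):

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)                      # suppress solver logging

@variable(model, x >= 0)               # decision variables with bounds
@variable(model, 0 <= y <= 3)
@objective(model, Min, 12x + 20y)      # linear objective
@constraint(model, c1, 6x + 8y >= 100) # linear constraints
@constraint(model, c2, 7x + 12y >= 120)

optimize!(model)
objective_value(model)                 # optimal cost
value(x), value(y)                     # optimal decision values
```

Swapping solvers is a one-line change: replace `HiGHS.Optimizer` with another supported optimizer.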
### Symbolics.jl

\[ \begin{equation} \left[ \begin{array}{c} x \\ y \\ \end{array} \right] \end{equation} \]

\[ \begin{equation} g\left( x \right) + x \frac{\mathrm{d} g\left( x \right)}{\mathrm{d}x} \end{equation} \]

\[ \begin{equation} x + x \frac{\mathrm{d}}{\mathrm{d}x} x \end{equation} \]
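Expressions like the ones displayed above can be produced with Symbolics.jl; a sketch (the notebook code behind the displays is not shown in these notes):

```julia
using Symbolics

@variables x y
v = [x, y]                            # a symbolic vector

@variables g(x)                       # an unspecified function of x
D = Differential(x)
expr = expand_derivatives(D(x * g))   # product rule: g(x) + x * dg/dx

expand_derivatives(D(x^2))            # with g(x) = x everything expands to 2x
```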
### Limitations
A list of known missing features is available.
### SymPy.jl

SymPy is a Python library for symbolic mathematics.
This package is a wrapper using PyCall.
With the excellent PyCall package of Julia, one has access to the many features of the SymPy library from within a Julia session.
It requires Python to run in the background, and installing it sometimes requires a few extra configuration steps.
The main point of this package is that using PyCall directly to access SymPy is somewhat cumbersome; SymPy.jl provides a more convenient interface.
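A small sketch of the interface (assumes SymPy.jl and a working Python installation of sympy):

```julia
using SymPy

@syms x
integrate(x^2, x)         # symbolic antiderivative: x^3/3
diff(x * sin(x), x)       # x*cos(x) + sin(x)
limit(sin(x)/x, x => 0)   # 1
solve(x^2 - 2, x)         # the two symbolic roots ±√2
```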
https://commons.wikimedia.org/wiki/File:Colored_neural_network.svg
\(x_1, x_2, \cdots\) are the inputs
\(y_1, y_2, \cdots\) are the outputs
\(w_{11}, w_{12}, \cdots\) are the weights
One computes \(\displaystyle b_1 = w_{11} x_1 + w_{21} x_2 + w_{31} x_3 + \cdots\)
and then \(y_1 = f ( b_1 )\)
\(f\) is the so-called activation function
We write \(\mathbf x = (x_1, \cdots, x_p)\), \(\mathbf y = (y_1, \cdots, y_q)\), \(\mathbf W = (w_{11}, w_{12}, \cdots )\)
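In this notation the computation is just \(b_j = \sum_i w_{ij} x_i\), \(y_j = f(b_j)\); a self-contained sketch (the names, the sigmoid activation, and the concrete numbers are our own illustration):

```julia
# A single dense layer, matching the notation above:
# b_j = Σ_i w_ij * x_i, then y_j = f(b_j)
f(b) = 1 / (1 + exp(-b))      # sigmoid activation (one common choice)

function layer(W, x)
    b = W' * x                # b_j = Σ_i w_ij x_i
    return f.(b)              # apply the activation elementwise
end

W = [0.1 0.4;
     0.2 0.5;
     0.3 0.6]                 # 3 inputs, 2 outputs
x = [1.0, 2.0, 3.0]
y = layer(W, x)               # 2-element output, each entry in (0, 1)

# Several layers: the output of one is the input of the next
W2 = [0.5 -0.5; -0.5 0.5]
y2 = layer(W2, y)
```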
Multi-layer networks can be built: the output of one layer is the input of the next.
If we have many pairs of inputs and their respective outputs, the network predicts
\[ \widehat{\mathbf y_i} = F(\mathbf x_i, \mathbf W) \]
Finding the best weights \(\mathbf W\) is called training the network.
There are many generalizations of this idea: deep networks (many layers), convolutional networks, …
How powerful is this? Google DeepMind video.
The algorithm that computes the output from the weights and the input is called forward propagation.
The data are split into a training set and a test set.
To fit the network we must choose a loss function \(d({\mathbf y}, \widehat{\mathbf y})\), for example \(|{\mathbf y} - \widehat{\mathbf y}|^2\).
From it we build a cost function that takes all the losses into account, for example the mean squared error \[ J(W) = \frac{1}{N} \sum_{i=1}^N |{\mathbf y_i} - \widehat{\mathbf y_i}|^2 \]
There are many options for the choice of cost function, and performing the optimization efficiently is a large part of the difficulty.
Many procedures are used for the optimization.
The simplest example is gradient descent \[ W_{n+1} = W_n - \gamma \nabla J (W_n). \]
To compute \(\nabla J\) one uses the chain rule. This process is called back-propagation.
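A tiny illustration of gradient descent on a one-parameter least-squares cost (a toy example of our own, with the gradient written out via the chain rule):

```julia
# Fit y ≈ w*x by gradient descent on the mean squared error
function fit(xs, ys; γ = 0.05, steps = 100)
    w = 0.0
    for _ in 1:steps
        # ∇J(w) = -2/N Σ (y_i - w x_i) x_i, by the chain rule
        grad = -2 * sum((ys .- w .* xs) .* xs) / length(xs)
        w -= γ * grad          # W_{n+1} = W_n - γ ∇J(W_n)
    end
    return w
end

xs = [1.0, 2.0, 3.0, 4.0]
ys = 2.0 .* xs                 # data generated with true weight w = 2
fit(xs, ys)                    # converges to ≈ 2.0
```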
Flux is one of the best ML libraries in Julia.
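A minimal sketch of defining and training a network with Flux (assumes Flux.jl is installed; the layer sizes and random data are our own illustration):

```julia
using Flux

# A two-layer network: 3 inputs -> 5 hidden units (relu) -> 2 outputs
model = Chain(Dense(3 => 5, relu), Dense(5 => 2))

x = rand(Float32, 3)
y = model(x)                          # forward propagation

# Training uses a loss and gradients computed by back-propagation:
loss(m, x, y) = Flux.mse(m(x), y)
opt = Flux.setup(Adam(), model)
data = [(rand(Float32, 3), rand(Float32, 2)) for _ in 1:16]
Flux.train!(loss, model, data, opt)   # one pass over the data
```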
Introduction to Julia course. David Gómez-Castro (UAM)